Create one large, long-format dataset with the dependent variable being performance on each of the 8 repetitions of the PAs. Factors will include:
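A minimal sketch of building such a long-format table. The grouping columns `ptp_trunk`, `condition`, and `new_pa_img_row_number_across_sessions` are taken from this document; the trial-level columns and values are assumptions for illustration only.

```r
library(dplyr)
library(tidyr)

# Hypothetical trial-level data; only the grouping column names come from
# this analysis, everything else is made up for the sketch.
set.seed(123)
trials <- crossing(
  ptp_trunk  = c("p01", "p02"),
  condition  = c("landmark_schema", "random_locations"),
  new_pa_img_row_number_across_sessions = 1:4,
  repetition = 1:8
) %>%
  mutate(correct_exact = rbinom(n(), 1, 0.5))

# One row per participant x condition x PA x repetition; .groups = "drop"
# silences the summarise() grouping message.
pa_long <- trials %>%
  group_by(ptp_trunk, condition,
           new_pa_img_row_number_across_sessions, repetition) %>%
  summarise(mean_correct = mean(correct_exact), .groups = "drop")
```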
This is helpful for comparing which accuracy radii to use.
Below, I plot only:
We can see that correct_exact is lowest, correct_rad_42 comes next, and correct_rad_63 and correct_one_square_away overlap almost exactly.
I suggest we use the correct_exact and the correct_one_square_away measures.
Next, overlay the curve fits, annotated with the intercept value, the learning-rate value, and the convergence status. For the convergence status:

Plot only correct_exact and correct_one_square_away, with a separate panel of plots for each.
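The per-participant fit and convergence check can be sketched as below. The exponential model form, the parameter names, and the start values are assumptions, not necessarily the ones used in this analysis.

```r
# Fit an exponential learning curve to one participant's data and record
# whether the fit converged; returns NA estimates when nls() errors out.
fit_learning_curve <- function(df) {
  fit <- tryCatch(
    nls(mean_correct ~ asymptote - (asymptote - intercept) *
          exp(-rate * (repetition - 1)),
        data  = df,
        start = list(asymptote = 0.9, intercept = 0.1, rate = 0.5)),
    error = function(e) NULL
  )
  if (is.null(fit)) {
    data.frame(intercept = NA_real_, rate = NA_real_, converged = FALSE)
  } else {
    cf <- coef(fit)
    data.frame(intercept = cf[["intercept"]],
               rate      = cf[["rate"]],
               converged = fit$convInfo$isConv)
  }
}
```

The returned `intercept`, `rate`, and `converged` values can then be pasted into the plot annotations.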
Looking at the participants for whom convergence failed, it is unclear why it happens. For some of the failure cases, the data do not look messy or odd in any way. For example: participant ‘6047ae5…’, random_locations condition, one_square_away measure.
Neighbor PAs are coded as “neighbor”. Non-neighbor PAs are coded as “island”.
Again, two panels of plots for the two dependent variables: “exact correct” and “correct one square away”.
We can see convergence in all but 1 case: 5d776e2… landmark_schema, one_square_away.
Also plot the different accuracy types within each condition. This illustrates nicely how performance grows across repetitions.
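A sketch of the within-condition plot, reshaping the accuracy columns to long form and drawing one line per accuracy type. The toy data and the exact column set are assumptions; the accuracy column names follow the measures discussed above.

```r
library(dplyr)
library(tidyr)
library(ggplot2)

# Hypothetical per-repetition accuracy summaries for two conditions.
acc <- crossing(condition  = c("landmark_schema", "random_locations"),
                repetition = 1:8) %>%
  mutate(correct_exact           = 0.2 + 0.08 * repetition,
         correct_rad_42          = correct_exact + 0.10,
         correct_one_square_away = correct_exact + 0.15)

# One row per condition x repetition x accuracy type.
acc_long <- acc %>%
  pivot_longer(starts_with("correct"),
               names_to = "accuracy_type", values_to = "accuracy")

p <- ggplot(acc_long, aes(repetition, accuracy, colour = accuracy_type)) +
  geom_line() +
  facet_wrap(~ condition)
```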
Conclusion: there is no drastic difference, though both need to be log-transformed.
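A sketch of the log transform before plotting; the `rates` tibble here is hypothetical, and setting `binwidth` explicitly also silences ggplot's default-binwidth message.

```r
library(dplyr)
library(ggplot2)

# Hypothetical learning-rate estimates (positive, right-skewed).
set.seed(42)
rates <- tibble(rate = rlnorm(100, meanlog = -0.5, sdlog = 0.7)) %>%
  mutate(log_rate = log(rate))

p <- ggplot(rates, aes(log_rate)) +
  geom_histogram(binwidth = 0.25)
```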
Conclusions from this section:
We see reasonable values:
First, plot the island and neighbor values on separate plots:
We see that for correct_one_square_away we get some “outlier” learning-rate parameter estimates.
Plot the difference values:
How many participants had the learning rate larger for landmark-neighbors vs non-neighbors?
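The count can be sketched as below, assuming a wide table with one learning-rate column per neighbor status; the tibble, its column names, and its values are hypothetical.

```r
library(dplyr)

# Hypothetical per-participant learning rates for neighbor vs island PAs.
rate_wide <- tibble(
  ptp_trunk     = c("p01", "p02", "p03"),
  rate_neighbor = c(0.8, 0.4, 0.9),
  rate_island   = c(0.5, 0.6, 0.7)
)

counts <- rate_wide %>%
  summarise(n_neighbor_larger = sum(rate_neighbor > rate_island),
            n_total           = n())
```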
The same, for the last two reps:
### Last four reps
The same, for the last four reps: